The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
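The survey reports which strategies participants used but not how they implemented them; the most common one, patch-based training for samples too large to process at once, admits a minimal sketch. The patch size, volume shape, and helper name below are illustrative assumptions, not values taken from the survey:

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from a volume too large to process at once."""
    if rng is None:
        rng = np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        # Pick a random corner such that the patch stays inside the volume.
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        slices = tuple(slice(c, c + p) for c, p in zip(corner, patch_size))
        patches.append(volume[slices])
    return np.stack(patches)

# Example: a synthetic 3D scan larger than a single training batch would allow.
scan = np.random.rand(256, 256, 180).astype(np.float32)
batch = sample_patches(scan, patch_size=(64, 64, 64), n_patches=4)
print(batch.shape)  # (4, 64, 64, 64)
```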
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, though this can be improved with prompting.
Momentum is a popular technique for improving convergence rates during gradient descent. In this research, we experiment with adding momentum to the Baum-Welch expectation-maximization algorithm for training hidden Markov models. We compare discrete hidden Markov models trained on English text and on malware opcode data. The effectiveness of momentum is determined by measuring changes in model score and classification accuracy. Our extensive experiments indicate that adding momentum to Baum-Welch can reduce the number of iterations required for initial convergence during HMM training, particularly in cases where the model converges slowly. However, momentum does not appear to improve the final model performance over a large number of iterations.
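The abstract does not give the exact update rule. One plausible reading, sketched below under that assumption, is a heavy-ball-style blend of the current Baum-Welch re-estimate with the previous parameter step, followed by renormalization so the rows remain valid distributions:

```python
import numpy as np

def momentum_reestimate(prev_params, params, new_params, beta=0.5):
    """One momentum-augmented Baum-Welch step (a sketch, not the paper's exact rule).

    params      : current HMM parameters (e.g., a row-stochastic transition matrix A)
    new_params  : plain Baum-Welch re-estimate computed from the E-step statistics
    prev_params : parameters from the previous iteration
    beta        : momentum coefficient; beta = 0 recovers standard Baum-Welch
    """
    updated = new_params + beta * (params - prev_params)
    # Clip and renormalize rows so each row stays a valid probability distribution.
    updated = np.clip(updated, 1e-12, None)
    return updated / updated.sum(axis=1, keepdims=True)

A_prev = np.array([[0.5, 0.5], [0.4, 0.6]])
A_curr = np.array([[0.6, 0.4], [0.3, 0.7]])
A_em   = np.array([[0.7, 0.3], [0.2, 0.8]])  # hypothetical re-estimate from one EM pass
print(momentum_reestimate(A_prev, A_curr, A_em, beta=0.5))
```

Setting beta to zero and recovering plain Baum-Welch is the natural sanity check for any such modification.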
We explore the locomotion of soft robots in granular media (GM), resulting from the elastic deformation of slender rods. A low-cost, rapidly fabricable robot inspired by the physiological structure of bacteria is presented. It consists of a rigid head, with a motor and batteries embedded, and multiple elastic rods (our model flagella) to investigate locomotion in GM. The elastic flagella are rotated at one end by the motor, and they deform due to drag from the GM, which propels the robot. The external drag is determined by the shape of the flagella, while the latter changes due to the competition between the external loading and elastic forces. In this coupled fluid-structure interaction problem, we observe that increasing the number of flagella can either decrease or increase the propulsion speed of the robot, depending on the physical parameters of the system. This nonlinearity in the functional relationships of such a simple robot motivates us to fundamentally analyze its mechanics using theory, numerical simulation, and experiments. We present an analytical framework based on simple Euler-Bernoulli beam theory, which is capable of qualitatively capturing both cases. When flagellar deformations are small, the theoretical predictions quantitatively match experiments. To account for the geometrically nonlinear deformations often encountered in soft robotics and microbiology, we implement a simulation framework that incorporates a discrete differential geometry-based simulation of elastic rods, a drag model based on resistive force theory, and a modified Stokes' law for the hydrodynamics of the robot head. Comparison with experimental data shows that the simulation can quantitatively predict the robot's motion. Overall, the theoretical and numerical tools presented in this paper can shed light on the design and control of this class of robots in granular or fluid media.
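The abstract invokes Euler-Bernoulli beam theory without stating its equations. For orientation, the standard small-deflection relation such a framework builds on is shown below; the exact form of the distributed load, which here would come from the granular drag model, is not given in the abstract and is left generic:

```latex
% Standard small-deflection Euler-Bernoulli relation for a slender rod of
% bending stiffness EI, transverse deflection w(x), and distributed load q(x):
EI \, \frac{\mathrm{d}^4 w}{\mathrm{d}x^4} = q(x)
```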
Federated machine learning is a technique for training a model across multiple devices without exchanging data between them. Because the data remains local to each compute node, federated learning is well-suited for use cases in fields where data is carefully controlled, such as medicine, or in domains with bandwidth constraints. One weakness of this approach is that most federated learning tools rely on a central server to perform workload delegation and to produce a single shared model. Here, we propose a flexible framework for decentralizing the federated learning paradigm, and provide an open-source, reference implementation compatible with PyTorch.
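The abstract names a PyTorch-compatible reference implementation but shows no API. The sketch below illustrates only the serverless idea it describes, peers averaging each other's weights directly with no central aggregator, using hypothetical peer objects rather than the framework's actual interface:

```python
import copy
import torch

def average_with_peers(model, peer_state_dicts):
    """Decentralized aggregation sketch: average this node's weights with its
    peers' weights, with no central server producing the shared model."""
    avg_state = copy.deepcopy(model.state_dict())
    for key in avg_state:
        stacked = torch.stack([avg_state[key].float()] +
                              [sd[key].float() for sd in peer_state_dicts])
        avg_state[key] = stacked.mean(dim=0)
    model.load_state_dict(avg_state)
    return model

# Hypothetical usage: three nodes train locally, then gossip their weights.
local = torch.nn.Linear(4, 2)
peers = [torch.nn.Linear(4, 2).state_dict() for _ in range(2)]
average_with_peers(local, peers)
```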
Spurious correlations allow flexible models to predict well during training but poorly on related test populations. Recent work has shown that models satisfying particular independencies involving correlation-inducing nuisance variables have guarantees on their test performance. Enforcing such independencies requires the nuisances to be observed during training. However, nuisances such as demographics or image background labels are often missing. Enforcing independence on just the observed data does not imply independence on the entire population. Here we derive MMD estimators for invariance objectives under missing nuisances. On simulations and clinical data, optimizing through these estimates achieves test performance similar to that of estimators which use the full data.
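The paper's missing-nuisance estimators are not reproduced in the abstract. As background, the standard biased (V-statistic) estimate of squared MMD with an RBF kernel, which invariance penalties of this kind build on, looks like this; the bandwidth and toy data are illustrative:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of x and the rows of y."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(100, 2))  # e.g., representations in one nuisance group
y = rng.normal(0.5, 1.0, size=(100, 2))  # representations in another group
print(mmd2(x, y))
```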
Despite recent advances in clinical natural language processing (NLP), we notice some resistance in the clinical and translational research community to adopting NLP models, owing to their limited transparency, interpretability, and usability. In this study, we propose an open natural language processing development framework and evaluate it through the implementation of NLP algorithms for the National COVID Cohort Collaborative (N3C). Given the interest in information extraction from COVID-19-related clinical notes, our work includes 1) an open data annotation process using COVID-19 signs and symptoms as the use case, 2) a community-driven ruleset-composing platform, and 3) a synthetic text data generation workflow for producing texts for information extraction tasks without involving human subjects. The corpora were derived from texts from three different institutions (Mayo Clinic, University of Kentucky, University of Minnesota). Gold-standard annotation was performed against a single-institution (Mayo) ruleset, resulting in F-scores of 0.876, 0.706, and 0.694 for the Mayo, Minnesota, and Kentucky test datasets, respectively. As a consortium effort of the N3C NLP subgroup, the study demonstrates the feasibility of creating a federated NLP algorithm development and benchmarking platform to enhance multi-institution clinical NLP research and adoption. Although we use COVID-19 as the use case in this work, our framework is general enough to be applied to other domains of interest in clinical NLP.
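The N3C ruleset itself is not shown in the abstract. A toy regex-based extractor of the kind a community-composed ruleset platform might execute is sketched below; the sign/symptom patterns are made up for illustration and are far simpler than a real clinical ruleset:

```python
import re

# Hypothetical sign/symptom rules; the real N3C ruleset is community-maintained
# and far larger, and clinical text needs negation and context handling too.
RULES = {
    "fever":   re.compile(r"\b(fever|febrile|pyrexia)\b", re.IGNORECASE),
    "cough":   re.compile(r"\bcough(ing)?\b", re.IGNORECASE),
    "anosmia": re.compile(r"\b(anosmia|loss of (smell|taste))\b", re.IGNORECASE),
}

def extract_symptoms(note):
    """Return the symptom concepts whose rule matches anywhere in a clinical note."""
    return [concept for concept, pattern in RULES.items() if pattern.search(note)]

print(extract_symptoms("Patient reports dry coughing and loss of smell; afebrile today."))
# ['cough', 'anosmia']
```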
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time; a DT must therefore be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems, geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a lightweight DT, allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies are considered, using a production gas turbine engine system, to demonstrate the digital representation accuracy for real-world, time-varying physical systems.
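The paper's transfer scheme is not specified in the abstract. One natural reading of "prioritisation and parsimonious transfer", sketched here with hypothetical names and data, is to send off-board only the samples where the on-board lightweight twin disagrees most with the measured signal:

```python
import numpy as np

def select_for_transfer(measured, predicted, budget=10):
    """Parsimonious transfer sketch: rank samples by the on-board twin's residual
    and send only the `budget` most surprising ones off-board for model updating."""
    residuals = np.abs(measured - predicted)
    worst = np.argsort(residuals)[::-1][:budget]
    return sorted(worst.tolist())

# Hypothetical sensor trace vs. the lightweight twin's prediction of it.
rng = np.random.default_rng(1)
measured = rng.normal(size=200)
predicted = measured + rng.normal(scale=0.1, size=200)
print(select_for_transfer(measured, predicted, budget=5))
```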
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
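The abstract describes what each forecasting track history contains but not the release's schema. The dataclass below is a hypothetical illustration of that record, not the actual AV2 API:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackHistory:
    """Hypothetical shape of one scored actor's history in a forecasting scenario."""
    positions: List[Tuple[float, float]]   # object location per past timestep (x, y)
    headings: List[float]                  # heading angle per timestep, in radians
    velocities: List[Tuple[float, float]]  # velocity per timestep (vx, vy)
    category: str                          # e.g., "vehicle", "pedestrian", "cyclist"

history = TrackHistory(
    positions=[(0.0, 0.0), (1.2, 0.1)],
    headings=[0.05, 0.06],
    velocities=[(12.0, 0.8), (12.1, 0.9)],
    category="vehicle",
)
```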